Conference Proceedings

Does Representational Fairness Imply Empirical Fairness?

A Shen, X Han, T Cohn, T Baldwin, L Frermann

Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 (2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing) | Published: 2022

Abstract

NLP technologies can cause unintended harms if learned representations encode sensitive attributes of the author, or if predictions systematically vary in quality across groups. Popular debiasing approaches, like adversarial training, remove sensitive information from representations in order to reduce disparate performance; however, the relation between representational fairness and empirical (performance) fairness has not been systematically studied. This paper fills this gap, and proposes a novel debiasing method building on contrastive learning to encourage a latent space that separates instances based on target label, while mixing instances that share protected attributes. Our results show the effecti…
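The contrastive objective described in the abstract — pull together instances with the same task label while mixing instances across protected groups — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the function names, the cross-group "mixing" positive mask, and the `alpha` weighting are all hypothetical choices for exposition.

```python
import numpy as np

def contrastive_loss(z, pos_mask, temperature=0.1):
    """Generic InfoNCE-style loss over a batch of embeddings z,
    where pos_mask[i, j] marks which pairs count as positives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise
    sim = z @ z.T / temperature
    n = z.shape[0]
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)            # exclude self-pairs
    # log-softmax over each anchor's similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = pos_mask & ~self_mask
    counts = pos.sum(axis=1)
    valid = counts > 0                                 # anchors with >=1 positive
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)[valid] / counts[valid]
    return per_anchor.mean()

def fair_contrastive_loss(z, y, a, alpha=1.0, temperature=0.1):
    """Hypothetical combination of two terms:
    1) cluster by task label y (same-label pairs are positives);
    2) mix protected groups a (pairs with *different* attribute
       values are positives, discouraging group clusters)."""
    y, a = np.asarray(y), np.asarray(a)
    label_pos = y[:, None] == y[None, :]
    mix_pos = a[:, None] != a[None, :]
    return (contrastive_loss(z, label_pos, temperature)
            + alpha * contrastive_loss(z, mix_pos, temperature))
```

Under this sketch, an embedding that clusters by task label while interleaving protected groups should incur a lower loss than one that clusters by the protected attribute, which is the qualitative behaviour the abstract describes.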
